Technion Students Expose Flaw in Google’s AI Protocol
Three second-year students exposed vulnerabilities in Google’s A2A protocol that allow attackers to steal data, inject malicious code, and take full control of AI agents — findings to be presented at SecTor 2025
Three second-year undergraduate students from the Henry and Marilyn Taub Faculty of Computer Science — Shaked Adi, Dvir Elshich, and Adar Peleg — have discovered a critical vulnerability in Google’s brand-new communication protocol for AI agents, A2A. They will present their findings in October at SecTor 2025, a leading international cybersecurity conference held in Canada as part of Black Hat.
The students conducted the research in the ATLAS AI research lab under the supervision of Prof. Avi Mendelson, Rom Himmelstein, Amit Levi, and Stav Cohen. Remarkably, they made the discovery within just three months of joining the lab.
The vulnerability in Google’s A2A protocol enables external attackers to:
- Steal data
- Plant malicious code
- Take control of AI agents
Because A2A was launched only a month ago, the discovery highlights both the risks inherent in AI systems and the vital role of academic research in safeguarding the digital world. The students have reported their findings to Google, noting that this is not the only weakness in the protocol, and they plan to continue their research in this domain.